[FEAT] Add replay from trace strategy#620
Conversation
Force-pushed from 008633f to a66034b

This pull request has merge conflicts that must be resolved before it can be merged.
Force-pushed from 7f893fb to 780be20
It would be great to get an example of how to produce the JSONL, because I can't find a solution in LiteLLM, for example.
Yeah that’s true, most frameworks won’t produce this exact JSONL directly. That’s kind of intentional: the idea here is to define a minimal, framework-agnostic canonical replay format, not something tied to a specific tracing stack. In practice, the required fields already exist almost everywhere (timestamp, input token count, output token count), just under slightly different names, so a small mapping step is usually enough.

I agree it’s not the best UX on its own, but it felt like the right minimal base for the feature. Then we can iterate on top of it with helpers/converters for common sources like LiteLLM or Langfuse, and extend it later (e.g. optional prompt field, multiple timestamp formats, richer metadata) without breaking the core idea. But happy to adjust the direction if maintainers prefer something more opinionated or integrated from the start.
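For example, a mapping step from a provider-style usage log could look like the sketch below. The source field names (`start_time`, `usage.prompt_tokens`, `usage.completion_tokens`) are illustrative and vary by provider; only the output keys `timestamp`, `input_length`, and `output_length` match what the trace reader in this PR expects:

```python
import json

def convert_to_replay_jsonl(source_records, out_path):
    # Hypothetical source records, e.g. exported from LiteLLM or Langfuse;
    # the input field names here are illustrative assumptions.
    rows = [
        {
            "timestamp": rec["start_time"],                     # epoch seconds
            "input_length": rec["usage"]["prompt_tokens"],      # input tokens
            "output_length": rec["usage"]["completion_tokens"], # output tokens
        }
        for rec in source_records
    ]
    rows.sort(key=lambda r: r["timestamp"])  # keep rows in arrival order
    with open(out_path, "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")
```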
sjmonson left a comment
Sorry for the silence on this. There are a few things with this PR that break other use-cases. I am still working on a more complete review, but here are a few low-hanging problems.
Thanks a lot for the detailed review, I really appreciate your time. I’m fully aligned with your feedback, especially on the replay handling in the entrypoint, which is a key part of the PR. I agree that introducing a special case here is not ideal and should be avoided. I’ll refactor this to make it cleaner and better aligned with the existing design.
Add trace replay capability to GuideLLM for reproducing real-world request patterns from trace files. This enables time-based request rate replay and synthetic prompt generation matching trace token counts.

- Add TraceReplayStrategy for scheduling requests at precise timestamps
- Add ReplayProfile for configuring trace-based benchmarking
- Add TraceSyntheticDatasetDeserializer for generating prompts from traces
- Support max_requests truncation to limit trace length

This is a minimal implementation to address issue 597. Full Mooncake format support, E2E tests, and documentation will follow in subsequent PRs.

Signed-off-by: Vincent Gimenes <vincent.gimenes@gmail.com>
- Relocate trace_io module from data/ to utils/
- Update imports in scheduler/strategies.py
- Update imports in benchmark/profiles.py
- Update imports in data/deserializers/trace_synthetic.py
- Update imports in tests/unit/scheduler/test_trace_replay.py

Signed-off-by: Vincent Gimenes <vincent.gimenes@gmail.com>
Force-pushed from 780be20 to 7d76d5f
…les as sole trace row cap
…and docs
Hi @sjmonson, thanks again for the feedback. I’ve completed the refactor and addressed the main review points. Key updates:

Would appreciate another look when you have time. Optional: I also put together a small Colab notebook to try the feature quickly if useful:
dbutenhof left a comment
A couple of comments mostly around documentation consistency...
Force-pushed from d87c287 to e17eb3d
Force-pushed from e17eb3d to b6c56f3
I tried running a test with this dataset and got a deadlock at the start of the benchmark. Here is a jsonl version for testing: data.jsonl.gz.
Thanks for testing this and for sharing the dataset. I can reproduce the issue on my side as well, including with a smaller subset of the trace. I’ll investigate it and work on a fix as soon as possible. At first glance, this seems related to how replay handles large/bursty traces and high-token-count requests. I’ll follow up once I have a clearer diagnosis and a fix.
…generation
Hi @sjmonson,

First, synthetic prompt generation now builds one reusable base prompt and creates each request prompt by adding a unique prefix before slicing to the requested input length. This keeps prompts cache-resistant while avoiding the previous expensive per-request generation path.

Second, trace replay is temporarily limited to one process. With multiple processes, there is currently a race condition where some scheduled requests can be consumed out of order or never sent, which leaves the benchmark waiting forever. Capping replay to one process is a workaround, but it makes the benchmark complete reliably; the only expected limitation is for extreme traces where one scheduling process may become a bottleneck.

I tested this with the shared JSONL trace: the benchmark no longer hangs and starts correctly after roughly 50 seconds. On a representative subset, the previous prompt generation path was at least 10x slower...
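As a rough illustration of that first change (hypothetical names, not the PR's actual code), the idea is:

```python
# Sketch of the reusable-base-prompt approach described above; helper and
# variable names are hypothetical, not the PR's actual implementation.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # any HF tokenizer works
base_tokens = tokenizer.encode("lorem ipsum dolor sit amet " * 2048)  # built once

def make_prompt(request_index: int, input_length: int) -> str:
    # A short unique prefix keeps each prompt cache-resistant, while slicing
    # the shared base avoids per-request generation and hits the target length.
    prefix = tokenizer.encode(f"req-{request_index}: ")
    return tokenizer.decode((prefix + base_tokens)[:input_length])
```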
Yeah... I don't love this idea, but fine for now. I see the dataset generation as temporary/fallback anyways since what we want longer term is to base the tokens off of the mooncake token ids.
Also works for now. Fixing this requires a way for the dataset to inform on request scheduling so we'll scope something out. |
For multiprocessing, I think I may have found a fairly minimal approach that avoids the replay deadlock while keeping the implementation relatively clean, but I agree it’s probably better scoped for a follow-up PR.

For the Mooncake token-id direction, unless I’m missing something, the current prefix invalidation approach can remain compatible with a more structure-aware strategy later on. Roughly, unrelated prompts would still receive different invalidating prefixes, while prompts sharing a common prefix could intentionally reuse the same initial invalidating block and only diverge later with unique suffix markers. Example:

- base prompt:
- prompt 1:
- prompt 2 (totally unrelated to prompt 1):
- prompt 3 (shares the same prefix structure as prompt 2 up to block D):

This keeps prompts cache-resistant globally while still allowing controlled shared-prefix behavior between related requests. The same idea could likely be extended recursively for deeper shared-prefix structures.
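One possible shape for that example, where bracketed letters stand for token blocks and X-markers for invalidating prefixes (purely illustrative, assuming this block notation):

```
base prompt:  [A][B][C][D][E]...
prompt 1:     [X1][A][B][C]...          unrelated: its own invalidating prefix
prompt 2:     [X2][A][B][C][D][F]...    its own invalidating prefix
prompt 3:     [X2][A][B][C][D][G]...    reuses X2, so blocks up to D stay shared
```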
augment review |
🤖 Augment PR Summary

Summary: Adds a new trace-replay benchmarking mode to reproduce real-world request arrival patterns from JSONL traces. Changes:

Technical notes: Trace rows are timestamp-sorted before scheduling and prompt generation.
^ Trialling some new AI review tools. I resolved the comments that I believe don't need to be fixed but check the remaining one.
There are two main problems with this strategy:

When you prepend tokens to a sequence, there is no guarantee that you will end up with the same stream of tokens once you detokenize -> retokenize. Your prefix can merge with the first token of the prompt to form a different token, which can cascade through the rest of the prompt. Usually this just results in slight token count inaccuracies, though occasionally you get a wider gap. In past versions of GuideLLM we did something similar and could see as much as a 25-token difference between a request's token count and its actual token count on a very long prompt. The severity of this issue depends on the tokenizer; many tokenizers are mostly unaffected as long as there is a space between the prefix and the prompt, while others have many tokens that cross word boundaries.

The other issue is that by using a single static block, the bytes-to-tokens (and words-to-tokens) ratio is static, which can matter for all of the code around the LLM (templating, HTTP pathways, etc.). It's not realistic for this ratio to be static, so the test could produce biased results.

I don't think these are deal-breakers to getting this code in, but it is something we will probably address down the line.
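A small experiment (not from the PR) showing the detokenize -> retokenize drift described above, using a Hugging Face tokenizer; "gpt2" and the prefix/prompt strings are arbitrary choices:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

prefix = "q7"                      # hypothetical unique invalidating prefix
prompt = "internationalization"

separate = len(tokenizer.encode(prefix)) + len(tokenizer.encode(prompt))
merged = len(tokenizer.encode(prefix + prompt))  # no space at the boundary

# When the prefix merges with the prompt's first token, merged can differ
# from separate, so the scheduled token count drifts from the actual count.
print(separate, merged)
```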
I fixed the small issue around the trace source.
Thanks, this is really interesting; I had never considered the bytes/token ratio aspect and its impact. If needed, using a small reusable pool of diverse base prompts instead of a single static one could already help.
Summary
- Add a `replay` benchmarking strategy that reproduces real-world request patterns from trace log files (.jsonl) — see the example below
- Add `max_requests` and `max_seconds` CLI options to limit the number of requests processed from a trace
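A trace file in the expected format is one JSON object per line (values here are illustrative), and a hypothetical invocation might look like the following; note that `--rate-type replay` is an assumption about how the profile is exposed, not confirmed wording from this PR:

```
{"timestamp": 0.0, "input_length": 512, "output_length": 128}
{"timestamp": 0.8, "input_length": 2048, "output_length": 64}
{"timestamp": 1.5, "input_length": 256, "output_length": 256}
```

```bash
# Hypothetical invocation: the replay rate-type name is an assumption.
guidellm benchmark --target "http://localhost:8000" --rate-type replay \
  --data trace.jsonl --max-requests 1000
```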
Motivation

This change addresses issue #597 by enabling users to benchmark their vLLM servers using real production traces. Instead of synthetic load patterns, users can now replay exact request arrival times and token distributions from their actual workloads for more realistic performance testing.
Changes
- `TraceReplayStrategy` scheduler strategy for timestamp-based request dispatching (sketched after this list)
- `ReplayProfile` class for configuring trace-based benchmarking parameters
- `TraceSyntheticDatasetDeserializer` to generate prompts matching trace input/output lengths
- `TraceReader` utility for reading .jsonl trace files with timestamp, input_length, output_length fields
- `Entrypoint` updates to handle replay profile and dataset configuration
- `max_requests` and `max_seconds` truncation support to limit trace replay length
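An illustrative sketch (not the PR's actual implementation) of timestamp-based dispatch: each request is delayed by its offset from the first trace timestamp, so the replay preserves the original arrival pattern.

```python
import asyncio
import time

async def replay(rows, send_request):
    start = time.monotonic()
    t0 = rows[0]["timestamp"]  # rows assumed sorted by timestamp

    async def fire(row):
        # Sleep until this row's offset from the trace start has elapsed.
        delay = (row["timestamp"] - t0) - (time.monotonic() - start)
        if delay > 0:
            await asyncio.sleep(delay)
        await send_request(row)

    await asyncio.gather(*(fire(row) for row in rows))
```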
Testing

- `pytest tests/unit/scheduler/test_trace_replay.py` (pass)
- `pytest tests/unit/benchmark/test_replay_profile.py` (pass)
- `pytest tests/unit/data/deserializers/test_trace_synthetic.py` (pass)
- Added tests: scheduling accuracy, boundary conditions, malformed trace handling, empty trace cases, max_requests truncation
You can try it in practice quickly with the Colab notebook: NB COLAB
Next Steps (this PR)
Out of Scope (future PRs or not)